On conjugacy of Cartan subalgebras in extended affine Lie algebras
That the finite-dimensional simple Lie algebras over the complex numbers can be
classified by purely combinatorial and geometric objects, such as
Coxeter-Dynkin diagrams and indecomposable irreducible root systems, is
arguably one of the most elegant results in mathematics. The root system is
defined by fixing a Cartan subalgebra of the given Lie algebra; the remarkable
fact is that, up to isomorphism, this construction is independent of the choice
of Cartan subalgebra. The modern way of establishing this fact is by showing
that all Cartan subalgebras are conjugate.
For symmetrizable Kac-Moody Lie algebras, with the appropriate definition of
Cartan subalgebra, conjugacy has been established by Peterson and Kac. An
immediate consequence of this result is that the root systems and generalized
Cartan matrices are invariants of the Kac-Moody Lie algebras. The purpose of
this paper is to establish conjugacy of Cartan subalgebras for extended affine
Lie algebras; a natural class of Lie algebras that generalizes the
finite-dimensional simple Lie algebra and affine Kac-Moody Lie algebras
Optimizing thermodynamic trajectories using evolutionary and gradient-based reinforcement learning
Using a model heat engine, we show that neural network-based reinforcement
learning can identify thermodynamic trajectories of maximal efficiency. We
consider both gradient and gradient-free reinforcement learning. We use an
evolutionary learning algorithm to evolve a population of neural networks,
subject to a directive to maximize the efficiency of a trajectory composed of a
set of elementary thermodynamic processes; the resulting networks learn to
carry out the maximally efficient Carnot, Stirling, or Otto cycles. When given
an additional irreversible process, this evolutionary scheme learns a
previously unknown thermodynamic cycle. Gradient-based reinforcement learning
is able to learn the Stirling cycle, whereas an evolutionary approach achieves
the optimal Carnot cycle. Our results show how the reinforcement learning
strategies developed for game playing can be applied to solve physical problems
conditioned upon path-extensive order parameters.
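The gradient-free approach described above can be illustrated with a minimal sketch of evolutionary optimization. This is not the authors' code: the population here is a set of parameter vectors rather than neural networks, and `efficiency` is a hypothetical toy objective standing in for the thermodynamic efficiency of a trajectory. The structure (evaluate, keep an elite fraction, refill with mutated copies) is the same.

```python
import random

def efficiency(params):
    # Toy surrogate objective (hypothetical): peaks when every
    # parameter equals 0.5, mimicking an optimal trajectory setting.
    return -sum((p - 0.5) ** 2 for p in params)

def evolve(pop_size=20, dims=3, generations=50, sigma=0.1, seed=0):
    rng = random.Random(seed)
    # Initialize a population of random parameter vectors.
    pop = [[rng.random() for _ in range(dims)] for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by efficiency and keep the top quarter as the elite.
        pop.sort(key=efficiency, reverse=True)
        elite = pop[: pop_size // 4]
        # Refill the population with Gaussian-mutated copies of elites.
        pop = elite + [
            [p + rng.gauss(0, sigma) for p in rng.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=efficiency)

best = evolve()
```

Because the elite survives each generation unchanged, the best objective value is monotonically non-decreasing, which is the property that lets such schemes discover high-efficiency cycles without any gradient information.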